Modern deep neural networks have achieved superhuman performance on tasks ranging from image classification to game playing. Surprisingly, these complex systems with massive numbers of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon, known as "Neural Collapse," was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have shown theoretically that the global solutions of the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove that Neural Collapse occurs for deep linear networks under the popular mean squared error (MSE) and cross-entropy (CE) losses. Furthermore, we extend our study to imbalanced data under the MSE loss and present the first geometric analysis of Neural Collapse in this setting.
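As a point of reference, the class-mean geometry at the core of Neural Collapse can be stated as the standard simplex equiangular tight frame (ETF) condition; the formulation below is given for illustration and is not part of our proofs:

```latex
% Let \mu_k denote the last-layer class means, \mu_G the global mean, and
% M = [\mu_1 - \mu_G, \dots, \mu_K - \mu_G] the centered class-mean matrix.
% The self-duality / ETF part of Neural Collapse states that
\frac{M^{\top} M}{\lVert M^{\top} M \rVert_F}
  \;\longrightarrow\;
  \frac{1}{\sqrt{K-1}} \left( I_K - \frac{1}{K} \mathbf{1}_K \mathbf{1}_K^{\top} \right),
% i.e., the centered class means form a simplex equiangular tight frame.
```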
We introduce an approach to the answer-aware question generation problem. Instead of relying only on the capability of strong pre-trained language models, we observe that the information about answers and questions can be found in a few relevant sentences in the context. Based on this observation, we design a model with two modules: a selector and a generator. The selector forces the model to focus more on the sentences relevant to an answer, providing implicit local information. The generator produces questions by implicitly combining local information from the selector with global information from the whole context encoded by the encoder. The model is trained jointly to take advantage of latent interactions between the two modules. Experimental results on two benchmark datasets show that our model outperforms strong pre-trained models on the question generation task. The code is also available (shorturl.at/lV567).
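To make the selector-generator interplay concrete, here is a minimal sketch of how such a model could be wired together; the module choices, dimensions, and fusion rule are assumptions for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class SelectorGenerator(nn.Module):
    """Minimal sketch of the selector + generator idea (illustrative only)."""

    def __init__(self, d_model=256, vocab_size=32000):
        super().__init__()
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)   # encodes the whole context
        self.selector = nn.Linear(d_model, 1)                        # scores sentence relevance to the answer
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)    # generates the question
        self.project = nn.Linear(d_model, vocab_size)

    def forward(self, sentence_embs, question_embs):
        # sentence_embs: (batch, n_sentences, d_model); question_embs: (batch, q_len, d_model)
        global_ctx, _ = self.encoder(sentence_embs)
        gates = torch.sigmoid(self.selector(global_ctx))              # implicit local relevance weights
        fused = (gates * global_ctx).sum(dim=1, keepdim=True)         # pool the relevant sentences
        h0 = fused.transpose(0, 1).contiguous()                       # (1, batch, d_model) initial decoder state
        decoded, _ = self.decoder(question_embs, h0)
        return self.project(decoded)                                  # token logits for the generated question

# Usage sketch: the two modules are trained jointly with a standard cross-entropy objective.
model = SelectorGenerator()
logits = model(torch.randn(2, 8, 256), torch.randn(2, 12, 256))       # -> (2, 12, 32000)
```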
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of the challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
This paper aims to improve the Warping Planar Object Detection Network (WPOD-Net) using feature engineering to increase accuracy. What problems can feature engineering solve for the warping object detection network? More specifically, we argue that adding knowledge about edges in the image enhances the information used to determine the license plate contour in the original WPOD-Net model. The Sobel filter, selected experimentally, acts as a convolutional neural network layer; its edge information is combined with the information of the original network to create the final embedding vector. The proposed model was compared with the original model on a dataset that we collected for evaluation. The results, evaluated with the Quadrilateral Intersection over Union metric, show that the model achieves a significant improvement in performance.
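As an illustration of how a Sobel filter can be injected as a fixed convolutional layer, a minimal sketch follows; the fusion point with the WPOD-Net backbone is an assumption on our part:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdgeBranch(nn.Module):
    """Sketch of a fixed Sobel filter used as a convolutional layer (illustrative only)."""

    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()                                       # vertical-gradient Sobel kernel
        kernel = torch.stack([gx, gy]).unsqueeze(1)       # (2, 1, 3, 3): horizontal and vertical edges
        self.register_buffer("kernel", kernel)            # fixed weights, not learned

    def forward(self, gray):                              # gray: (batch, 1, H, W) grayscale plate image
        return F.conv2d(gray, self.kernel, padding=1)     # (batch, 2, H, W) edge maps

# The edge maps would then be concatenated with the backbone features before the final
# embedding, e.g. torch.cat([backbone_feats, edge_feats], dim=1) (fusion point assumed).
```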
Graph neural networks (GNNs) have demonstrated excellent performance in a wide range of applications. However, the enormous size of large-scale graphs hinders their application in real-time inference scenarios. Although existing scalable GNNs leverage linear propagation to preprocess the features and accelerate training and inference, these methods still suffer from scalability issues when making inferences on unseen nodes, as the feature preprocessing requires the graph to be known and fixed. To speed up inference in the inductive setting, we propose a novel adaptive propagation order approach that generates a personalized propagation order for each node based on its topological information. This avoids redundant computation during feature propagation. Moreover, the trade-off between accuracy and inference latency can be flexibly controlled by simple hyper-parameters to match the latency constraints of different application scenarios. To compensate for the potential loss of inference accuracy, we further propose Inception Distillation to exploit multi-scale reception information and improve inference performance. Extensive experiments are conducted on four public datasets with different scales and characteristics, and the results show that our proposed inference acceleration framework outperforms the SOTA graph inference acceleration baselines in terms of both accuracy and efficiency. In particular, the advantage of our method is more significant on larger-scale datasets, and our framework achieves a $75\times$ inference speedup on the largest Ogbn-products dataset.
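A minimal sketch of node-adaptive propagation is given below; the stopping criterion used here (a feature-change tolerance) is our own simplification of "topological information," and the function name is hypothetical:

```python
import numpy as np

def personalized_propagation(adj_norm, features, max_steps=8, tol=1e-3):
    """Sketch of node-adaptive feature propagation (illustrative; the paper's actual
    per-node criterion may differ). Each node stops propagating once its feature
    update becomes small, so easy nodes use fewer propagation steps than hard ones."""
    x = features.copy()
    active = np.ones(x.shape[0], dtype=bool)              # nodes still propagating
    steps = np.zeros(x.shape[0], dtype=int)               # personalized propagation order per node
    for _ in range(max_steps):
        if not active.any():
            break
        new_x = adj_norm @ x                               # one step of linear propagation
        delta = np.linalg.norm(new_x - x, axis=1)          # per-node feature change
        x[active] = new_x[active]
        steps[active] += 1
        active &= delta > tol                              # freeze nodes whose features converged
    return x, steps
```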
Successful artificial intelligence systems often require large amounts of labeled data to extract information from document images. In this paper, we investigate the problem of improving the performance of AI systems for document image understanding, especially when training data are limited. We address the problem by proposing a novel fine-tuning method based on reinforcement learning. Our approach treats the information extraction model as a policy network and uses policy gradient training to update the model, maximizing a composite reward function that complements the conventional cross-entropy loss. Our experiments on four datasets, using both labels and expert feedback, show that our fine-tuning mechanism consistently improves the performance of state-of-the-art information extractors, especially in the small training data regime.
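As a sketch of how a policy-gradient term can complement the conventional cross-entropy loss, consider the following; the reward shape, sampling scheme, and weighting are assumptions for illustration, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def rl_finetune_loss(logits, targets, rewards, ce_weight=0.5):
    """Sketch of a combined REINFORCE + cross-entropy objective (illustrative only).

    logits: (batch, num_labels) model outputs, targets: (batch,) gold labels,
    rewards: (batch,) scalar rewards from labels and/or expert feedback.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    sampled = torch.multinomial(log_probs.exp(), num_samples=1)     # sample actions from the policy
    sampled_logp = log_probs.gather(1, sampled).squeeze(1)
    pg_loss = -(rewards * sampled_logp).mean()                       # policy-gradient term maximizing reward
    ce_loss = F.cross_entropy(logits, targets)                       # conventional supervised term
    return pg_loss + ce_weight * ce_loss
```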
Data-free knowledge distillation (DFKD) has recently attracted attention thanks to its appealing ability to transfer knowledge from a teacher network to a student network without using training data. The main idea is to use a generator to synthesize data for training the student. As the generator is updated, the distribution of the synthetic data changes. This distribution shift can be large if the generator and the student are trained adversarially, causing the student to forget the knowledge acquired in previous steps. To alleviate this problem, we propose a simple yet effective method called Momentum Adversarial Distillation (MAD), which maintains an exponential moving average (EMA) copy of the generator and uses synthetic samples from both the generator and the EMA generator to train the student. Since the EMA generator can be viewed as an ensemble of the generator's old versions and typically changes less after an update than the generator itself, training on its synthetic samples helps the student recall past knowledge and prevents the student from adapting too quickly to the generator's new updates. Our experiments on six benchmark datasets, including ImageNet and Places365, show that MAD outperforms competing methods at handling the large distribution shift problem. Our method also compares favorably with existing DFKD methods and even achieves state-of-the-art results in some cases.
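A minimal sketch of the two MAD ingredients, an EMA copy of the generator and a student update on samples from both generators, is given below; the decay value and function names are assumptions for illustration:

```python
import torch

def ema_update(generator, ema_generator, decay=0.999):
    """Sketch of the EMA generator update (decay value is an assumption).
    ema_generator is typically initialized as a deep copy of the generator."""
    with torch.no_grad():
        for p, p_ema in zip(generator.parameters(), ema_generator.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1.0 - decay)

def student_step(student, teacher, generator, ema_generator, z, kd_loss):
    """One student update on synthetic samples from both the current and the EMA generator."""
    x = torch.cat([generator(z), ema_generator(z)], dim=0)   # mix current and momentum samples
    with torch.no_grad():
        t_out = teacher(x)                                    # teacher predictions as soft targets
    s_out = student(x)
    return kd_loss(s_out, t_out)                              # e.g. KL divergence between logits
```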
Software vulnerabilities existing in the programs or functions of computer systems are a serious and critical problem. Typically, in a program or function consisting of hundreds or thousands of source code statements, only a few statements cause the corresponding vulnerability. Currently, vulnerability labeling is carried out by experts at the function or program level with the assistance of machine learning tools. Extending this approach to the code statement level is even more costly and time-consuming and remains an open problem. In this paper, we propose a novel end-to-end deep learning approach to identify the vulnerability-related code statements of a specific function. Inspired by specific structures observed in real-world vulnerable code, we first leverage mutual information to learn a set of latent variables representing the relevance of source code statements to the vulnerability of the corresponding function. We then propose novel clustered spatial contrastive learning to further improve the robust selection of vulnerability-related code statements. Experimental results on a real-world dataset of 200k+ C/C++ functions show the superiority of our method over other state-of-the-art baselines. In general, when run on real-world datasets in an unsupervised setting, our method achieves 3% to 14% higher performance than the baselines as measured by VCP, VCA, and Top-10 ACC. Our released source code samples are publicly available at https://github.com/vannguyennd/livuitcl.
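To illustrate the contrastive component, here is a minimal sketch of a cluster-based contrastive loss over statement embeddings; it is a generic InfoNCE-style formulation, not the paper's exact clustered spatial contrastive learning objective:

```python
import torch
import torch.nn.functional as F

def statement_contrastive_loss(stmt_emb, cluster_ids, temperature=0.1):
    """Sketch of a cluster-based contrastive loss over statement embeddings (illustrative).

    stmt_emb: (n_statements, d) embeddings, cluster_ids: (n_statements,) cluster assignments.
    Statements in the same cluster are pulled together, all others are pushed apart."""
    z = F.normalize(stmt_emb, dim=1)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))                               # exclude self-similarity
    pos = (cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)).float()
    pos.fill_diagonal_(0.0)                                         # a statement is not its own positive
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)      # softmax log-probabilities per row
    denom = pos.sum(dim=1).clamp(min=1.0)                           # number of positives per statement
    return -(log_prob * pos).sum(dim=1).div(denom).mean()
```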
Due to the ubiquity of computer software, software vulnerabilities (SVs) have become a widespread, serious, and critical problem. Many machine learning-based approaches have been proposed to solve the software vulnerability detection (SVD) problem. However, two open and significant issues remain for SVD: i) learning automatic representations to improve the predictive performance of SVD, and ii) tackling the scarcity of labeled vulnerability datasets, which conventionally require expert knowledge to build. In this paper, we propose a novel end-to-end approach to address these two crucial issues. We first leverage automatic representation learning with deep domain adaptation for software vulnerability detection. We then propose a novel cross-domain kernel classifier leveraging the max-margin principle to significantly improve the transfer learning of software vulnerabilities from labeled projects to unlabeled projects. Experimental results on real-world software datasets show that our proposed method outperforms state-of-the-art baselines. In short, compared with the second-highest method on the datasets used, our method obtains a higher F1-measure, the most important measure in SVD, by from 1.83% to 6.25%. Our released source code samples are publicly available at https://github.com/vannguyennd/dam2p
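As a rough illustration of a max-margin objective spanning labeled source and unlabeled target projects, consider the sketch below; it is a simplified hinge-loss stand-in, not the paper's actual cross-domain kernel classifier:

```python
import torch
import torch.nn.functional as F

def max_margin_cross_domain_loss(src_scores, src_labels, tgt_scores, margin=1.0):
    """Sketch of a max-margin objective over source (labeled) and target (unlabeled)
    project representations (illustrative only; weighting and margin are assumptions)."""
    # Hinge loss on labeled source functions (labels in {-1, +1}).
    src_loss = F.relu(margin - src_labels * src_scores).mean()
    # Encourage confident, large-margin predictions on unlabeled target functions.
    tgt_loss = F.relu(margin - tgt_scores.abs()).mean()
    return src_loss + tgt_loss
```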
We present the first empirical study investigating the influence of disfluency detection on the downstream tasks of intent detection and slot filling. We conduct this study on Vietnamese, a low-resource language for which there is no previous study and no public dataset available to explore. First, we extend the fluent Vietnamese intent detection and slot filling dataset PhoATIS by manually adding contextual disfluencies and annotating them. Then, we conduct experiments with strong baselines for disfluency detection and joint intent detection and slot filling, which are based on pre-trained language models. We find that: (i) disfluencies negatively affect the performance of the downstream intent detection and slot filling tasks, and (ii) in the disfluency setting, the pre-trained multilingual language model XLM-R produces better intent detection and slot filling performance than the pre-trained monolingual model PhoBERT, which is the opposite of what is usually found in the fluent setting.